Understanding and combating robust overfitting via input loss landscape analysis and regularization


Abstract

Adversarial training is widely used to improve the robustness of deep neural networks to adversarial attack. However, adversarial training is prone to overfitting, and the cause is far from clear. This work sheds light on the mechanisms underlying robust overfitting by analyzing the loss landscape w.r.t. the input. We find that robust overfitting results from standard training, specifically the minimization of the clean loss, and can be mitigated by regularization of the gradients. Moreover, robust overfitting turns severer during adversarial training partially because the gradient regularization effect becomes weaker due to the increase in the loss landscape's curvature. To improve robust generalization, we propose a new regularizer to smooth the loss landscape by penalizing the weighted logits variation along the adversarial direction. Our method significantly mitigates robust overfitting and achieves the highest efficiency compared to similar previous methods. Code is available at https://github.com/TreeLLi/Combating-RO-AdvLC.
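The proposed regularizer penalizes the weighted variation of the logits between a clean input and its adversarial counterpart. A minimal NumPy sketch of such a penalty (the function name, the per-class weighting, and the L1 form are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def logit_variation_penalty(clean_logits, adv_logits, weights=None):
    """Penalize how much the logits change along the adversarial direction.

    clean_logits, adv_logits: arrays of shape (batch, num_classes), the
    model outputs for clean inputs x and adversarial inputs x'.
    weights: optional per-class weights (hypothetical weighting scheme).
    Returns a scalar penalty: the mean weighted L1 logit variation.
    """
    variation = np.abs(adv_logits - clean_logits)
    if weights is not None:
        variation = variation * weights
    return variation.sum(axis=-1).mean()

# Toy usage: one sample, two classes.
clean = np.array([[1.0, 2.0]])
adv = np.array([[1.5, 1.0]])
penalty = logit_variation_penalty(clean, adv)  # |0.5| + |-1.0| = 1.5
```

In training, a term like this would be added to the adversarial loss with a trade-off coefficient, encouraging a smoother loss landscape w.r.t. the input.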


Similar resources

Response articles: micro and macro analysis

The present study reports an analysis of response articles in four different disciplines in the social sciences, i.e., linguistics, English for Specific Purposes (ESP), accounting, and psychology. The study has three phases: micro analysis, macro analysis, and e-mail interview. The results of the micro analysis indicate that a three-level linguistic pattern is used by the writers in order to cr...


AR-Boost: Reducing Overfitting by a Robust Data-Driven Regularization Strategy

We introduce a novel, robust data-driven regularization strategy called Adaptive Regularized Boosting (AR-Boost), motivated by a desire to reduce overfitting. We replace AdaBoost’s hard margin with a regularized soft margin that trades-off between a larger margin, at the expense of misclassification errors. Minimizing this regularized exponential loss results in a boosting algorithm that relaxe...
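The core idea above is to replace AdaBoost's hard margin with a regularized soft margin inside the exponential loss. A toy sketch of such a soft-margin exponential loss (the parameter `rho` and the exact form are illustrative assumptions, not AR-Boost's actual formulation):

```python
import numpy as np

def regularized_exp_loss(margins, rho=0.1):
    """Soft-margin exponential loss over per-sample margins y * f(x).

    Samples whose margin falls below the slack rho are penalized more
    heavily, trading a larger margin against misclassification errors.
    rho is an illustrative regularization knob, not the paper's notation.
    """
    return np.exp(-(margins - rho)).mean()

# Toy usage: a margin exactly at the slack contributes exp(0) = 1.
loss_at_slack = regularized_exp_loss(np.array([0.1]), rho=0.1)
```

Minimizing a loss of this shape inside the boosting loop is what relaxes the hard-margin behaviour that makes plain AdaBoost prone to overfitting noisy data.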


Neural Network Regularization via Robust Weight Factorization

Regularization is essential when training large neural networks. As deep neural networks can be mathematically interpreted as universal function approximators, they are effective at memorizing sampling noise in the training data. This results in poor generalization to unseen data. Therefore, it is no surprise that a new regularization technique, Dropout, was partially responsible for the now-ub...


Covariance structure regularization via entropy loss function

The need to estimate structured covariance matrices arises in a variety of applications and the problem is widely studied in statistics. We propose a new method for regularizing the covariance structure of a given covariance matrix whose underlying structure has been blurred by random noise, particularly when the dimension of the covariance matrix is high. The regularization is made by choosing...
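A standard entropy (Stein's) loss between a candidate structured covariance Σ and an observed covariance S is L(Σ, S) = tr(Σ⁻¹S) − log det(Σ⁻¹S) − p, which is zero iff Σ = S. A minimal NumPy sketch of this loss (the regularization procedure built on top of it in the paper is not reproduced here):

```python
import numpy as np

def entropy_loss(sigma, s):
    """Entropy (Stein's) loss between covariance matrices sigma and s.

    L(sigma, s) = tr(sigma^{-1} s) - log det(sigma^{-1} s) - p,
    where p is the dimension. Nonnegative for SPD inputs; zero iff
    sigma == s, so minimizing it over a structured family of sigma
    pulls the structure toward the noisy observed covariance s.
    """
    p = sigma.shape[0]
    m = np.linalg.solve(sigma, s)          # sigma^{-1} s without explicit inverse
    _, logdet = np.linalg.slogdet(m)       # stable log-determinant
    return np.trace(m) - logdet - p

# Toy usage: identical matrices give zero loss.
zero = entropy_loss(np.eye(3), np.eye(3))
```

Regularization then amounts to choosing, within a family of structured covariance matrices, the member minimizing this loss against the noisy estimate.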


A contrastive analysis of concord and head parameter in English and Azerbaijani

This thesis examines and compares two topics in English and Azerbaijani: agreement between verb and subject (in person and number) and the head parameter. First, the grammatical relation of agreement is examined. Agreement means that a singular verb occurs with a singular subject and a plural verb with a plural subject. In English, all verbs, except the verb to be, show number agreement with their subject only in the third person singular and in the present tense...



Journal

Journal title: Pattern Recognition

Year: 2023

ISSN: 1873-5142, 0031-3203

DOI: https://doi.org/10.1016/j.patcog.2022.109229